Patent abstract:
The invention relates to an image processing method involving: receiving, by a processing device, an input image (IB) captured by a matrix of infrared-sensitive pixels; determining, based on a column component vector (VCOL) representing a variation between columns, a first scale factor (α) by estimating the level of the column-to-column variation present in the input image; generating column offset values (α·VCOL(y)) based on the product of the first scale factor by the values of the vector; determining, based on a 2D dispersion matrix (IDISP) representing a 2D dispersion, a second scale factor (β) by estimating the level of said 2D dispersion present in the input image; generating pixel offset values (β·IDISP(x, y)) based on the product of the second scale factor by the values of the matrix; and generating a corrected image (IC') by applying the pixel and column offset values.
Publication number: FR3020735A1
Application number: FR1453917
Filing date: 2014-04-30
Publication date: 2015-11-06
Inventors: Amaury Saragaglia; Alain Durand
Applicant: Ulis SAS
IPC primary class:
Patent description:

[0001] TECHNICAL FIELD The present invention relates to the field of infrared image sensors, and in particular to a method and a device for making offset and gain corrections in an image captured by a matrix of pixels sensitive to infrared light. BACKGROUND In the field of infrared (IR) imaging, offset correction in captured images without using a shutter or the like is a significant challenge for uncooled infrared imaging devices such as bolometers, and also for cooled infrared imaging devices. Such imaging devices include an array of infrared-sensitive detectors forming a matrix of pixels. A spatial non-uniformity between the pixels of the pixel array, which causes the offset to be corrected in the image, varies not only in time but also as a function of temperature. This problem is generally addressed by using an internal mechanical shutter in the imaging device, and involves periodically capturing an image while the shutter is closed in order to obtain a reference image of a relatively uniform scene which can then be used for calibration. However, the use of a shutter has several disadvantages, such as additional weight and cost, and the brittleness of this component. In addition, for some applications, the use of a shutter is unacceptable because of the time that is lost while the shutter is closed and the calibration is performed. During this calibration period, no image of the scene can be captured. Image processing techniques for correcting the offset have been proposed as an alternative to using a shutter. However, the existing techniques are complex and/or do not sufficiently correct the image. There is therefore a need in the art for an improved offset correction method in an infrared image. SUMMARY One object of embodiments of the present invention is to at least partially solve one or more of the disadvantages of the prior art.
In one aspect, there is provided an image processing method comprising: receiving, by a processing device, an input image captured by a matrix of infrared-sensitive pixels, the pixel array having a plurality of pixel columns, each of which is associated with a reference pixel; determining, based on a column component vector representing a column-to-column variation introduced by the pixel array, a first scale factor by estimating the level of column variation present in the input image; generating column offset values based on the product of the first scale factor by the values of the vector; determining, based on a 2D dispersion matrix representing a 2D dispersion introduced by the pixel array, a second scale factor by estimating the level of the 2D dispersion present in the input image; generating pixel offset values based on the product of the second scale factor by the values of the matrix; and generating a corrected image by applying the pixel and column offset values.
[0002] According to one embodiment, the method further comprises generating a partially corrected image based on the column offset values, wherein the second scale factor is generated based on the partially corrected image.
[0003] According to one embodiment, the column vector and the dispersion matrix are determined based on a reference image representing offsets introduced by the pixel array. According to one embodiment, the corrected image is generated based on the equation: IC'(x, y) = IB(x, y) − α × ICOL(x, y) − β × IDISP(x, y), where IB(x, y) is the input image, α is the first scale factor, ICOL(x, y) is a matrix comprising in each of its rows the column vector, β is the second scale factor, and IDISP(x, y) is the dispersion matrix.
[0004] According to one embodiment, the column vector represents the difference between a first column vector based on a first reference image taken at a first ambient temperature and a second column vector based on a second reference image taken at a second ambient temperature; the dispersion matrix represents the difference between a first dispersion matrix based on the first reference image and a second dispersion matrix based on the second reference image. According to one embodiment, the corrected image is generated based on the equation: IC'(x, y) = IB(x, y) − I⁰COL(x, y) − I⁰DISP(x, y) − α × I'COL(x, y) − β × I'DISP(x, y), where IB(x, y) is the input image, α is the first scale factor, I⁰COL(x, y) is a matrix comprising in each of its rows the first column vector, I'COL(x, y) is a matrix equal to (I¹COL − I⁰COL), where I¹COL is a matrix comprising in each of its rows the second column vector, β is the second scale factor, I⁰DISP(x, y) is the first dispersion matrix, and I'DISP(x, y) is a matrix equal to (I¹DISP − I⁰DISP), where I¹DISP is the second dispersion matrix.
[0005] According to one embodiment, the method further comprises determining at least one column residue offset value based on the corrected image. According to one embodiment, determining the at least one column residue offset value comprises: determining weights associated with at least some of the pixels of the corrected image, the weights being generated based on an estimate of the uniformity of the neighborhood of each of the at least some pixels; calculating, for each of the at least some pixels, the difference from a pixel value in the corresponding line of an adjacent column; and applying the weights to the differences and integrating the weighted differences to generate the at least one column residue offset value. According to one embodiment, the estimate of the uniformity of the neighborhood of each of the at least some pixels is based on a gradient value and a horizontal variance value calculated for each neighborhood. According to one embodiment, the determination of the first scale factor comprises: applying a high-pass filter along the lines of the image; applying the high-pass filter to the column vector; and determining column averages of the filtered image, the first scale factor being determined based on minimizing the differences between the column means of the filtered image and the filtered values of the column vector. According to one embodiment, the first scale factor α is determined based on the following equation: α = [Σx ((1/m) Σy T(IB(x, y))) × T(VCOL(x))] / [Σx T(VCOL(x)) × T(VCOL(x))], where T() represents a high-pass filter applied to the column vector VCOL and to the lines of the input image IB(x, y), and m is the number of lines of the image.
According to one embodiment, the determination of the second scale factor comprises: determining, for each pixel of the input image and for each element of the dispersion matrix, a gradient value based on at least one adjacent pixel, the second scale factor being determined based on minimizing the difference between the gradients of the input image and the gradients of the dispersion matrix. According to one embodiment, the second scale factor β is determined based on the following equation: β = Σ(∇xIB × ∇xIDISP + ∇yIB × ∇yIDISP) / Σ((∇xIDISP)² + (∇yIDISP)²), where IB is the input image, IDISP is the dispersion matrix, ∇x is the pixel gradient value between adjacent pixels in the direction of the lines in the input image, and ∇y is the pixel gradient value in the direction of the columns in the input image. According to one embodiment, the column and pixel offset values are applied to an additional input image. According to one embodiment, the method further comprises determining a gain correction value based on the corrected image, by minimizing the variance of the difference between the corrected image multiplied by a gain matrix and the gain correction value multiplied by the gain matrix.
In another aspect, there is provided an image processing device comprising: a memory storing a column vector representing a column-to-column variation introduced by the pixel array, and a dispersion matrix representing a 2D dispersion introduced by the pixel array; and a processing device adapted to: receive an input image captured by a matrix of infrared-sensitive pixels, the pixel array having a plurality of pixel columns each of which is associated with a reference pixel; determine, based on the column vector, a first scale factor by estimating a level of column variation present in the input image; generate column offset values based on the product of the first scale factor by the values of the vector; determine, based on the dispersion matrix, a second scale factor by estimating the level of the 2D dispersion present in the input image; generate pixel offset values based on the product of the second scale factor by the values of the matrix; and generate a corrected image by applying the column and pixel offset values. BRIEF DESCRIPTION OF THE DRAWINGS The above-mentioned and other features and advantages will become apparent from the following detailed description of embodiments, given by way of illustration and not limitation, with reference to the accompanying drawings in which: FIG. 1 schematically illustrates an imaging device according to an exemplary embodiment; FIG. 2 schematically illustrates, in greater detail, a portion of a pixel array of the imaging device of FIG. 1 according to an exemplary embodiment; FIG. 3 schematically illustrates, in more detail, an image processing block of the imaging device of FIG. 1 according to an exemplary embodiment; FIG. 4 is a flowchart illustrating steps of an image processing method according to an exemplary embodiment; FIG. 5 is a flowchart illustrating steps of an image processing method according to another embodiment; Fig.
6 is a flowchart illustrating steps of a method of removing column residues according to an exemplary embodiment; and Fig. 7 is a flowchart illustrating in more detail the steps of Fig. 6 according to an exemplary embodiment. DETAILED DESCRIPTION While some of the embodiments of the following description are described in relation to a microbolometer-type pixel array, those skilled in the art will appreciate that the methods described herein apply equally to other types of infrared imaging devices, including cooled devices. FIG. 1 illustrates an infrared imaging device 100 comprising a matrix 102 of pixels sensitive to infrared light. For example, in some embodiments, the pixel array is responsive to long-wavelength infrared light, such as light having a wavelength of 7 to 13 µm. To simplify the illustration, a matrix 102 of only 144 pixels 104, arranged in 12 rows and 12 columns, is illustrated in FIG. 1. In other embodiments, the pixel array 102 may comprise any number of rows and columns of pixels. Typically, the matrix comprises, for example, 640 by 480, or 1024 by 768 pixels. Each column of pixels of the matrix 102 is associated with a corresponding reference structure 106. Although not functionally an imaging element, this structure will be referred to herein as a "reference pixel" by structural analogy with the imaging (or active) pixels 104. In addition, an output block 108 is coupled to each column of the pixel array 102 and to each reference pixel 106, and provides a raw image IB. A control circuit 110 supplies, for example, control signals to the pixel matrix, to the reference pixels 106, and to the output block 108. The raw image IB is, for example, supplied to an image processing block 112, which applies offsets and gains to the pixels of the image to produce a corrected image IC.
Figure 2 illustrates in more detail two columns C1 and C2 of the pixel array 102 and their associated reference pixels and output circuits according to an example in which the imaging device is a microbolometer. Each column of pixels of the pixel array 102 is coupled to a corresponding column line 202. For example, each pixel 104 comprises a switch 204, a transistor 206 and a bolometer 208 coupled in series between the corresponding column line 202 and a ground node. Bolometers are well known in the art, and comprise, for example, a membrane suspended over a substrate, comprising a layer of an infrared-absorbing material and having the property that its resistance is modified by an increase in the temperature of the membrane due to the presence of infrared radiation. The switches 204 of the pixels of each row are controlled, for example, by a common selection signal. For example, in FIG. 2, a first row of pixels is controlled by a control signal S1, and a second row of pixels is controlled by a control signal S2. The transistors 206 are, for example, NMOS transistors receiving on their gates a bias voltage VFID for controlling the potential drop in the active bolometers, by inducing a stable voltage on one of their terminals, the other terminal being coupled to ground. The reference pixel 106 associated with each column comprises a transistor 210 and a blind bolometer 212 coupled in series between the corresponding column line 202 and a skimming voltage VSK. The skimming voltage VSK sets the highest potential of the bolometer bridge formed by the active and reference pixels by inducing a stable voltage on one of the terminals of the reference bolometer. Transistor 210 is, for example, a PMOS transistor receiving on its gate a bias voltage GSK for controlling the potential drop in the blind bolometer by inducing a stable voltage on the other terminal of the blind bolometer.
The blind bolometers 212 have, for example, a structure similar to that of the active bolometers 208 of the pixel matrix, but are rendered insensitive to radiation from the scene, for example by a screen formed of a reflective barrier and/or by structural heat absorption, for example by providing the substrate with high thermal conductivity, the bolometer being formed, for example, in direct contact with the substrate. Each column line 202 is further coupled to an output circuit forming part of the output block 108 of FIG. 1. In the example of FIG. 2, each output circuit comprises a capacitive transimpedance amplifier (CTIA) 214 having its negative input terminal coupled to the corresponding column line 202, and its positive input terminal receiving a reference voltage VBUS. The output of amplifier 214 provides an output voltage VOUT of the column. A capacitor 216 and a switch 218 are coupled in parallel between the negative input terminal and the output terminal of the amplifier 214. During a reading phase of the pixel matrix 102, the rows of pixels are read, for example one by one, by activating and deactivating the switch 218 of each output circuit of the output block 108 to reset the voltage on the capacitor 216, and by activating the appropriate selection signal S1, S2, etc. of the row to be read. The difference between the current Icomp in the reference pixel and the current Im in the active pixel is integrated by the capacitor 216 during a finite integration time to produce an output voltage VOUT representing the value of the pixel. Figure 3 illustrates in more detail the image processing block 112 of Figure 1 according to an exemplary embodiment.
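As a rough sketch, the readout described above behaves like an ideal integrator: the difference between the reference and active currents accumulates on the feedback capacitor 216 during the integration time. The function below is an illustrative model only; its name, sign convention and the VBUS offset are assumptions, not taken from the patent.

```python
def ctia_output(i_comp, i_active, t_int, c_int, v_bus=0.0):
    """Ideal CTIA model: VOUT is the charge integrated on the feedback
    capacitor divided by its capacitance, referred to VBUS.
    Signs and the VBUS reference are modelling assumptions."""
    return v_bus + (i_comp - i_active) * t_int / c_int

# Example: a 100 nA current difference integrated for 50 us on 1 pF gives 5 V.
```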
[0006] The functions of the image processing block 112 are, for example, implemented by software, and the image processing block 112 comprises a processing device 302 comprising one or more processors controlled by instructions stored in an instruction memory 304. In other embodiments, the functions of the image processing block 112 may be implemented at least partially by dedicated hardware. In this case, the processing device 302 comprises, for example, an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA), and the instruction memory 304 may be omitted. The processing device 302 receives the raw input image IB, and generates the corrected image IC, which is for example supplied to a display (not shown) of the imaging device. The processing device 302 is also coupled to a data memory 306 storing a vector VCOL, representing a column-to-column structural variation introduced by the pixel array 102, and a matrix IDISP representing a non-column-bound 2D structural dispersion introduced by the pixel array 102. The variation between columns mainly results, for example, from the use of the reference pixels 106 in each column, since the row of column reference pixels is generally not perfectly uniform. The non-column-bound 2D dispersion results mainly, for example, from local and/or structural physical differences between the active bolometers of the pixel array, resulting for example from dispersions of the technological process. The data memory 306 also stores, for example, a gain matrix, discussed in more detail below. Fig. 4 is a flowchart illustrating steps of an image processing method according to an exemplary embodiment. This method is carried out, for example, by the image processing block 112 described above.
[0007] It is assumed that a raw image IB has been captured by the pixel array 102 of FIG. 1, and that this pixel array is such that each column of the array is associated with a corresponding reference pixel.
[0008] In addition, it is assumed that the column vector VCOL and the 2D dispersion matrix IDISP are available. The column vector VCOL and the 2D dispersion matrix IDISP are generated based, for example, on a single reference image IREF(x, y) captured during an initial start-up phase of the imaging device. The reference image is, for example, one that has been captured in front of a black body or a uniform emission scene and at a controlled temperature. Mainly for the purpose of cancelling the temporal noise, the reference image is obtained, for example, by averaging several images, for example about 50 images. This reference image IREF(x, y) is considered, for example, as having column structural components and a 2D dispersion according to the following relation: IREF(x, y) = ICOL(x, y) + IDISP(x, y), where ICOL(x, y) is a matrix representing the reference variations between the columns of the pixel matrix, and IDISP(x, y) is the matrix representing the reference 2D dispersion for each pixel of the pixel matrix. The matrix ICOL(x, y) can be represented by the vector VCOL, which has a length equal to the number n of columns in the image; the matrix ICOL(x, y) has a number m of rows equal to the number of rows of the image, each row being equal to the vector VCOL. The generation of the vector VCOL involves, for example, the processing of the reference image IREF to extract the column component, which is equal, for example, for each column, to the average of the values of the pixels of the column. The generation of the matrix IDISP involves, for example, subtracting, from each pixel value of the reference image, the value of the corresponding column variation VCOL(x).
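The decomposition IREF = ICOL + IDISP described above can be sketched as follows with NumPy. The frame averaging and the per-column mean follow the description; the function and variable names are illustrative.

```python
import numpy as np

def build_reference_components(ref_frames):
    """Average several uniform-scene frames to cancel temporal noise,
    then split the result into a per-column component VCOL and the
    remaining 2D dispersion matrix IDISP."""
    i_ref = np.mean(ref_frames, axis=0)     # temporal average of ~50 frames
    v_col = i_ref.mean(axis=0)              # column component: mean of each column
    i_disp = i_ref - v_col[np.newaxis, :]   # subtract VCOL(x) from every row
    return v_col, i_disp
```

By construction, each column of IDISP has zero mean, so the two components sum back to the averaged reference image.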
[0009] In a first step 401 of FIG. 4, a scale factor α is determined, for the raw image IB, representing the level of the variation between columns VCOL in the image IB. In one embodiment, the value of the scale factor α is determined based on a minimization of the column components ICOL in the image IB. This can be expressed by the following minimization function: α = argmin over α ∈ ℝ of Σx,y (IB(x, y) − α × ICOL(x, y))², where the argmin of a function is the argument for which it reaches its minimum value. As indicated above, the matrix ICOL(x, y) can be represented as a vector VCOL(x), which defines a single value for each column. To facilitate the resolution of this minimization problem, and also to simplify the calculation, the raw image IB is, for example, also put in the form of a vector by taking an average of each column. In addition, high-pass filtering is applied, for example, horizontally in the image, in other words in the row direction, and also to the vector VCOL(x). The minimization problem is then solved, for example, based on the average of the columns of the transformed image, as follows: α = argmin over α ∈ ℝ of Σx ((1/m) Σy T(IB(x, y)) − α × T(VCOL(x)))², where T() is a high-pass filter applied to the column vector VCOL and to the input image IB(x, y), and m is the number of rows of the image. For example, the filtering function is defined by T(X) = X * hp, in other words the convolution of the matrix X with a horizontal high-pass filter hp. In one example, the filter function hp is defined by the coefficients [0.0456, -0.0288, -0.2956, 0.5575, -0.2956, -0.0288, 0.0456], applied with the central coefficient at the pixel in question inside a local window along the x dimension. More generally, the filter is, for example, a high-pass filter adapted to sharpen the vertical edges of the image, in other words to extract the column noise.
The minimization problem is solved, for example, based on the following direct solution for the scale factor α: α = [Σx ((1/m) Σy T(IB(x, y))) × T(VCOL(x))] / [Σx T(VCOL(x)) × T(VCOL(x))]. In other words, the determination of the scale factor α involves, for example: applying the high-pass filter to the raw image along its rows and also to the reference column vector; determining the averages of the columns of the filtered image, resulting in a vector of the same size as the reference column vector; and then determining the scale factor as the minimization of the difference between the two column vectors, i.e. between the averages of the columns of the filtered image and the filtered column vector. As will be described in more detail below, the scale factor α makes it possible to determine, for the image, column offset values α·ICOL(x, y). Referring again to Figure 4, at a subsequent step 402, a scale factor β is determined, for the raw image IB, representing the contribution level of the reference 2D dispersion component IDISP in the image. To determine the scale factor β, it is assumed that the captured images are natural images, having for example natural scene statistics, and that large local variations between pixels, i.e. variations between one pixel and its neighbors, are the result of the fixed 2D dispersion. A value of the scale factor β is determined, for example, such that it reduces the impact of this dispersion on the entire image. The approach adopted is, for example, to minimize the total variation (TV) in the image, based on the following equation: β = argmin over β of Σx,y |∇(IB − β × IDISP)|, where ∇() is the pixel gradient value.
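The direct solution for α can be sketched as follows; the seven-tap kernel is the one given above, while function and variable names are illustrative assumptions.

```python
import numpy as np

# Horizontal high-pass kernel from the text, centred on the pixel in question.
HP = np.array([0.0456, -0.0288, -0.2956, 0.5575, -0.2956, -0.0288, 0.0456])

def hp_rows(a):
    # Convolve every row with the high-pass kernel, keeping the same length.
    return np.apply_along_axis(lambda r: np.convolve(r, HP, mode="same"), -1, a)

def estimate_alpha(i_b, v_col):
    """Least-squares fit of the filtered column vector to the column means
    of the row-filtered raw image, per the direct solution above."""
    t_img = hp_rows(i_b).mean(axis=0)         # column means of filtered image
    t_col = hp_rows(v_col[np.newaxis, :])[0]  # filtered reference column vector
    return float(np.sum(t_img * t_col) / np.sum(t_col * t_col))
```

By linearity of the filter, an image that is exactly a scaled copy of the reference column pattern yields that scale as α.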
As a good approximation of this minimization problem, it is treated, for example, as the minimization of: β = argmin over β of Σx,y |∇(IB − β × IDISP)|². Such a minimization problem is solved, for example, based on the following direct solution for the factor β: β = Σ(∇xIB × ∇xIDISP + ∇yIB × ∇yIDISP) / Σ((∇xIDISP)² + (∇yIDISP)²), where ∇x is the pixel gradient value between adjacent pixels in a horizontal direction of the image, in other words along each row, and ∇y is the pixel gradient value between adjacent pixels in a vertical direction of the image, in other words along each column.
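The direct solution for β can be sketched with forward-difference gradients; the discretization and names are assumptions, since the patent does not fix them.

```python
import numpy as np

def estimate_beta(img, i_disp):
    """Least-squares fit of the dispersion pattern's gradients to the
    image gradients, following the direct solution above."""
    gx = lambda a: np.diff(a, axis=1)   # horizontal gradient, along each row
    gy = lambda a: np.diff(a, axis=0)   # vertical gradient, along each column
    num = np.sum(gx(img) * gx(i_disp)) + np.sum(gy(img) * gy(i_disp))
    den = np.sum(gx(i_disp) ** 2) + np.sum(gy(i_disp) ** 2)
    return float(num / den)
```

Because only gradients enter the fit, a uniform scene offset in the image does not bias β.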
[0010] Step 402 therefore involves determining, for each pixel of the raw image and for each element of the reference 2D dispersion matrix, a gradient value based on at least one adjacent pixel; and determining the scale factor β based on minimizing the difference between the gradients of the raw input image and the gradients of the 2D reference dispersion matrix. In this example, the scale factor β is determined based on the raw image IB. However, in other embodiments, the scale factor β can be determined based on an image from which the column offsets α·ICOL(x, y) have already been removed. The direct solution for the scale factor β then becomes: β = Σ(∇xICC × ∇xIDISP + ∇yICC × ∇yIDISP) / Σ((∇xIDISP)² + (∇yIDISP)²), where ICC is the image in which each pixel (x, y) has been corrected, for example, based on the equation: ICC(x, y) = IB(x, y) − α × ICOL(x, y), where ICOL(x, y) is the matrix comprising, in each of its rows, the column vector VCOL(x). In some embodiments, in the calculation of operation 402, only pixel gradient values that are below a high threshold are taken into account. Indeed, it is assumed, for example, that very strong gradients correspond to edges associated with the scene in the image. For example, this high threshold is chosen to be about three times the largest gradient value calculated for the reference dispersion matrix IDISP. At a next step 403, a corrected image IC' is generated based on the column offset values, equal to α·ICOL(x, y), and the pixel offset values, equal to β·IDISP(x, y).
For example, the corrected image is calculated by: IC'(x, y) = IB(x, y) − α·ICOL(x, y) − β·IDISP(x, y). In the embodiment described above, the corrected image is based on the components ICOL and IDISP taken from a single reference image IREF. In other embodiments, in order to increase the accuracy of the reference correction, instead of a single reference image, two reference images are, for example, captured at different detection temperatures T0 and T1 respectively, and the device stores, for example, in the memory 306 of FIG. 3, a first set of components I⁰COL and I⁰DISP based on the reference image taken at T0, and a second set of components I¹COL and I¹DISP based on the reference image taken at T1. In such a case, the values of the scale factors α and β are determined, for example, using these components by directly subtracting the first components from the input image IB(x, y) and, for example, calculating α and β in the same manner as described in relation to steps 401 and 402, but based on a structural column component I'COL = (I¹COL − I⁰COL) and a 2D dispersion component I'DISP = (I¹DISP − I⁰DISP). The corrected image IC' is then determined, for example, by: IC'(x, y) = IB(x, y) − I⁰COL(x, y) − I⁰DISP(x, y) − α × I'COL(x, y) − β × I'DISP(x, y). In some embodiments, steps 401 to 403 may provide sufficient offset correction, in which case the corrected image IC' provides the image IC at the output of the image processing block 112. In other embodiments, an additional step 404 and/or an additional step 405 are, for example, carried out subsequently to generate the final image IC. In step 404, a gain correction is, for example, applied to the image by calculating an image correction factor γ from the corrected image IC', subtracting it from this image, and multiplying each pixel value of the result by the gain matrix.
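Step 403 then reduces to a per-pixel subtraction of the two scaled reference components, as in this sketch (illustrative names; VCOL is broadcast over the rows, as in the matrix ICOL):

```python
import numpy as np

def correct_image(i_b, alpha, v_col, beta, i_disp):
    """IC'(x, y) = IB(x, y) - alpha * VCOL(x) - beta * IDISP(x, y)."""
    return i_b - alpha * v_col[np.newaxis, :] - beta * i_disp
```

With the exact scale factors, a raw image built from a scene plus the two scaled components is restored to the scene.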
For example, this gain correction is based on the following minimization problem: γ = argmin over γ of var(Gain × IC' − γ × Gain). The direct solution for the image correction factor γ is then: γ = (mean(Gain × IC' × Gain) − mean(Gain × IC') × mean(Gain)) / var(Gain), where mean(X) is the average of the matrix X, and Gain is the gain matrix, which is stored, for example, in the memory 306 with the vector VCOL and the matrix IDISP. Such a gain matrix is determined, for example, as is known to those skilled in the art, from the difference between two reference images captured in front of a uniform source, such as a black body, at two different temperatures. A corrected image IC'' is then generated based on the multiplication of the gain-corrected image by the gain matrix. For example, the corrected image IC'' is computed by: IC'' = (IC' − γ) × Gain. In step 405, for example, the offset residues, such as column residues and/or dispersion residues remaining in the image, are suppressed. Figure 5 is a flowchart illustrating steps of an offset correction method in an image according to a further exemplary embodiment. As represented by a processing block (CALCULATE α) 502, the input image IB and the column vector VCOL are used, for example, to generate the scale factor α. The vector VCOL is then multiplied by the scale factor α to generate the column offset values α·VCOL(y). These offset values are then subtracted from the raw image IB to provide the image ICC corrected of the variations between columns. As represented by a processing block (CALCULATE β) 504, the image ICC corrected of the variations between columns and the dispersion matrix IDISP are, for example, used to generate the scale factor β. The matrix IDISP is then multiplied by the scale factor β to generate the pixel offset values β·IDISP(x, y). These offset values are then subtracted from the image ICC to generate the corrected image IC'.
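One consistent reading of the gain-correction step is γ = cov(Gain × IC', Gain) / var(Gain), the least-squares minimizer of var(Gain × IC' − γ × Gain). A sketch under that assumption, with illustrative names:

```python
import numpy as np

def gain_correct(i_c, gain):
    """Compute the image correction factor gamma and the gain-corrected
    image IC'' = (IC' - gamma) * Gain.  gamma = cov(Gain*IC', Gain)/var(Gain)
    is an assumed reading of the direct solution."""
    g_img = gain * i_c
    gamma = (np.mean(g_img * gain) - np.mean(g_img) * np.mean(gain)) / np.var(gain)
    return (i_c - gamma) * gain, float(gamma)
```

For a perfectly uniform residual image IC' = c, gamma equals c and the corrected image is identically zero, as expected for a pure offset.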
In addition, the offset values α·VCOL(y) and β·IDISP(x, y) are, for example, added to each other to provide the offset values of the reference image (OFFSET VALUES 506). Although not shown in FIG. 5, the gain correction described above in relation to step 404 of FIG. 4 is then optionally applied to the corrected image IC', and the image correction factor is applied to the image. As represented by a processing block 508, a correction of the column residues is then performed, for example based on the corrected image IC', in order to generate the column residue offset values (COLUMN RESIDUE VALUES) 510, and an image IOR corrected of the column residues. Indeed, certain offsets associated with the column circuit of each column may still be present in the image, possibly resulting in vertical bands visible in the image. The correction of column residues is performed, for example, to remove these column artifacts. As represented by a processing block 512, a correction of the 2D dispersion residues is then performed, for example based on the image IOR corrected of the column residues, to generate the offset values of the 2D dispersion residues (DISPERSION RESIDUE OFFSET VALUES) 514. The correction of the dispersion residues is effected, for example, by using a smoothing filter which preserves the edges and reduces noise, such as an anisotropic diffusion filter. In the example of FIG. 5, the offset values 506, 510 and 514 are summed to provide summed offset values SOV(x, y) to be applied to the image. These offset values SOV(x, y) are then subtracted from the raw image IB. However, an advantage in determining the summed offset values SOV(x, y) is that these offsets can be applied to an image different from the one used to generate them. For example, in the case where the raw image IB is an image of a video sequence captured by the imaging device, the offset values SOV(x, y) are subtracted, for example, from a next image of the sequence.
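The summing of offsets 506, 510 and 514 and their deferred application can be sketched as follows (illustrative names; per-column values are broadcast over the rows):

```python
import numpy as np

def summed_offset_values(alpha, v_col, beta, i_disp, col_res, disp_res):
    """SOV(x, y): reference offsets (506) plus column residue (510) and
    dispersion residue (514) offsets, ready to subtract from a raw frame."""
    ref = alpha * v_col[np.newaxis, :] + beta * i_disp   # offset values 506
    return ref + col_res[np.newaxis, :] + disp_res       # + 510 + 514

def apply_offsets(raw, sov):
    # SOV may be subtracted from the frame that produced it, or from the
    # next frame of a video sequence to reduce latency.
    return raw - sov
```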
This results in a reduction of the delay between capturing the raw image and obtaining the corrected image. In alternative embodiments, the offsets 506 of the reference image are applied to the current or next raw image, the gain correction is performed on the resulting image, and the column residue and dispersion residue offset values are, for example, subsequently applied to the gain-corrected image.
[0011] The column residue correction performed at step 508 of Fig. 5 will now be described in more detail with reference to the flow charts of Figs. 6 and 7.
[0012] Fig. 6 is a flowchart illustrating steps of a method for calculating column residue offsets to be applied to an image, for example to the corrected image IC resulting from the method described above. These steps are implemented, for example, by the image processing block 112 of FIG. 3 described above. At a first step 601, weights wx,y are calculated for each pixel of each column of the image, based on an estimate of the uniformity in a local area of the image defined by the neighborhood of the pixel. The higher the uniformity, the higher the weight associated with the pixel, since it is most likely that the column-to-column difference there results from structural (fixed) column residues. For example, the calculation is based, for each pixel, on a neighborhood of 9 by 3 pixels including the pixel in question, in other words on a rectangular area extending up to four pixels to the left and right of the pixel in question and up to one pixel above and below it. Uniformity is estimated, for example, by calculating a local gradient and a horizontal local variance for the neighborhood of each pixel. In some embodiments, the weights are calculated based on the image after the columns have been vertically filtered by a low-pass filter, as will be described in more detail below. At a step 602, the average difference between each column and the next column is weighted by the corresponding weights, giving for each column a corresponding column-to-column offset value VColw(x).
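Step 601 can be sketched as below. The text specifies a 9-by-3 neighborhood, a local gradient and a horizontal variance, but not the exact weighting law, so the mapping 1/(1 + gradient + variance) is an assumed placeholder.

```python
import numpy as np

def uniformity_weights(img):
    """Per-pixel weights from the uniformity of a 9x3 neighborhood:
    flat areas get weights near 1, busy areas get small weights."""
    m, n = img.shape
    w = np.zeros((m, n))
    for y in range(m):
        for x in range(n):
            # 9 pixels wide (x-4..x+4), 3 pixels tall (y-1..y+1), clipped
            nb = img[max(0, y - 1):y + 2, max(0, x - 4):x + 5]
            grad = np.abs(np.diff(nb, axis=1)).mean() if nb.shape[1] > 1 else 0.0
            hvar = nb.var(axis=1).mean()          # horizontal variance per line
            w[y, x] = 1.0 / (1.0 + grad + hvar)   # assumed weighting law
    return w
```

A perfectly flat image yields weight 1 everywhere; any horizontal structure lowers the weight.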
For example, the column-to-column shift vector VColw(x) is determined based on the following equation:

VColw(x) = (1/m) Σy=1..m wx,y · (Ix+1,y − Ix,y)

where wx,y are the weights calculated in step 601 above, m is the number of rows, Ix+1,y is the value of the pixel at position y in the next column x+1, and Ix,y is the value of the pixel at position y in the current column x. In other words, each column-to-column shift value VColw(x) is determined as an average of the weighted differences between the pixel values of column x and column x+1. The last column offset value in the row is, for example, set to 0. As an additional step, since the column-to-column offset represents only the forward shift between one column and the next, each term of the offset vector is, for example, integrated backward, that is to say starting with the penultimate term, each column offset value being added cumulatively to the previous value until reaching the first term, to provide a final column offset term OffCol(x) as follows:

OffCol(n) = 0, and OffCol(x − 1) = OffCol(x) + VColw(x − 1)

These offset values provide, for example, the column residue offset values 510 of FIG. 5, which are, for example, added together with the other offsets to provide the summed offset values SOV(x, y) to be subtracted from the raw image IB or from a subsequent raw image. Alternatively, at a next step 603, the offset values are applied to the image IC' already corrected based on the column variation components VCOL and the 2D dispersion IDISP, in order to generate a clean image ICR. In addition, as shown by a dashed arrow in FIG. 6, in some embodiments steps 601 and 602 are repeated at least once based on the clean image generated, in order to remove even more of the structural column residues. Figure 7 is a flowchart illustrating in more detail the steps of Figure 6 according to an exemplary embodiment.
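Purely as an illustration (this listing is not part of the claimed method, and the function name is a choice of this sketch), the weighted averaging and backward integration described above can be expressed in Python as follows; the sign convention with which the resulting offsets are applied to the image depends on the embodiment:

```python
def column_offsets(image, weights):
    """Weighted column-to-column offsets VColw(x) and their backward
    integration into final column offsets OffCol(x).

    image, weights: lists of m rows by n columns of floats."""
    m = len(image)
    n = len(image[0])
    # VColw(x) = (1/m) * sum over y of w[x,y] * (I[x+1,y] - I[x,y]);
    # the last column has no following column, so its value is 0.
    vcol = [
        sum(weights[y][x] * (image[y][x + 1] - image[y][x]) for y in range(m)) / m
        for x in range(n - 1)
    ] + [0.0]
    # Backward integration: OffCol(n) = 0, OffCol(x-1) = OffCol(x) + VColw(x-1),
    # starting from the penultimate term and accumulating toward the first.
    offcol = [0.0] * n
    for x in range(n - 1, 0, -1):
        offcol[x - 1] = offcol[x] + vcol[x - 1]
    return vcol, offcol
```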
[0013] As illustrated, step 601 of FIG. 6 for calculating weights based on the uniformity of the area in which the pixel is present involves, for example, sub-steps 701 through 704.
[0014] Step 701 involves applying a low-pass filter vertically along the columns of the image. Applying such a low-pass filter for example reduces the impact of the fixed pattern noise (FPN - "fixed pattern noise") associated with image sensors. This filtering step is optional. The filter is for example a Gaussian filter defined by a kernel size wG and by its standard deviation σ. In one example, the filter has a kernel size wG of 9 and a standard deviation σ of 2. Step 702 involves calculating a local gradient value P∇ and a horizontal local variance value PVAR for each pixel in the image. The local gradient P∇ is for example determined based on the following equation:

P∇ = sqrt((Gx * I)² + (Gy * I)²)

where Gx and Gy are the Sobel operators in x and y respectively, and I is the image, for example after being filtered in step 701. The horizontal local variance PVAR is for example determined based on the following equation:

PVAR = Σf∈θ (Ī − I(f))²

where θ is the horizontal neighborhood of the pixel considered, Ī is the average of the pixel values in the horizontal neighborhood and I(f) is the corresponding pixel value in the horizontal neighborhood. The horizontal neighborhood θ has for example a size 2 * wV + 1 of at least three pixels and for example no more than nine pixels, and can be generally defined as follows:

θ = [Ix−wV,y, Ix−wV+1,y, ..., Ix,y, ..., Ix+wV−1,y, Ix+wV,y]

In step 703, a horizontal morphological dilation is performed, for example, on each of the local gradient and horizontal local variance values, based for example on the following equations:

PDIL_∇ = (P∇ ⊕ θ'), PDIL_VAR = (PVAR ⊕ θ')

where (P ⊕ θ') for a given pixel x is for example equal to max[P(x̃), x̃ ∈ θ'], and θ' is a horizontal neighborhood of pixels of the pixel x.
The neighborhood θ' for example has a size 2 * wD + 1 of at least 3 pixels, and can be generally defined by:

θ' = [Px−wD,y, Px−wD+1,y, ..., Px,y, ..., Px+wD−1,y, Px+wD,y]

The horizontal morphological dilation has the effect of reducing the influence of blur on the calculation of the weights (blur generally caused by optical aberrations) in the presence of strongly contrasted vertical structures in the scene, by extending to horizontally adjacent pixels the influence of areas with high horizontal gradient values and/or high variance values. In step 704, the values PDIL_∇ and PDIL_VAR are for example normalized for each column so that, for a column, their sum is equal to 1. For example, these normalized values are calculated based on the following equations:

P̃∇x,y = PDIL_∇x,y / Σy PDIL_∇x,y    P̃VARx,y = PDIL_VARx,y / Σy PDIL_VARx,y

The weights wx,y are then determined, for example, by a linear combination of these normalized values P̃∇x,y and P̃VARx,y, giving an estimate of the uniformity in a local area of the image defined by the neighborhood of the pixel. In some embodiments, the gradient and variance values may have equal influence, in which case the weights wx,y are for example determined by the following equation:

wx,y = (P̃∇x,y + P̃VARx,y) / 2

Alternatively, a different influence can be assigned to the variance and gradient values by applying a scalar to each value. For example, the weights are determined by the following equation:

wx,y = a · P̃∇x,y + b · P̃VARx,y

where a and b are the scalars, and for example a + b = 1. For example, by choosing b larger than a, for example at least five times larger, a greater influence can be given to the horizontal variance. Referring again to FIG. 7, the weights form a weight map 705.
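Purely as an illustration (and not part of the claims), the weight-map computation of steps 701 to 704 can be sketched in Python as follows. Several simplifications are assumed: central differences stand in for the Sobel operators, the optional Gaussian prefilter of step 701 is omitted, and the equal-influence combination a = b = 1/2 is used; depending on the convention chosen, the complement of the normalized values may be taken instead, so that the more uniform areas receive the higher weights. The function names are choices of this sketch:

```python
def weight_map(image, wV=1, wD=1):
    """Per-pixel gradient magnitude and horizontal local variance,
    horizontal morphological dilation (running max over 2*wD+1 pixels),
    then per-column normalization and averaging of the two maps."""
    m, n = len(image), len(image[0])

    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    grad = [[0.0] * n for _ in range(m)]
    var = [[0.0] * n for _ in range(m)]
    for y in range(m):
        for x in range(n):
            # Central differences (clamped at the borders) as a simple
            # stand-in for the Sobel operators Gx and Gy of the text.
            gx = image[y][clamp(x + 1, 0, n - 1)] - image[y][clamp(x - 1, 0, n - 1)]
            gy = image[clamp(y + 1, 0, m - 1)][x] - image[clamp(y - 1, 0, m - 1)][x]
            grad[y][x] = (gx * gx + gy * gy) ** 0.5
            # Horizontal local variance over a neighborhood of 2*wV+1 pixels.
            hood = [image[y][clamp(x + k, 0, n - 1)] for k in range(-wV, wV + 1)]
            mean = sum(hood) / len(hood)
            var[y][x] = sum((v - mean) ** 2 for v in hood)

    def dilate(p):
        # Horizontal morphological dilation: running maximum.
        return [[max(p[y][clamp(x + k, 0, n - 1)] for k in range(-wD, wD + 1))
                 for x in range(n)] for y in range(m)]

    def norm_cols(p):
        # Normalize each column so that its values sum to 1.
        out = [[0.0] * n for _ in range(m)]
        for x in range(n):
            s = sum(p[y][x] for y in range(m)) or 1.0
            for y in range(m):
                out[y][x] = p[y][x] / s
        return out

    g = norm_cols(dilate(grad))
    v = norm_cols(dilate(var))
    # Equal-influence linear combination (a = b = 1/2).
    return [[(g[y][x] + v[y][x]) / 2 for x in range(n)] for y in range(m)]
```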
In some embodiments, a weight is calculated for each pixel of the image, while in other embodiments steps 701 to 704 may be adapted to provide weights for only certain pixels, for example for the pixels of every second, third or fourth column of the image, in order to reduce the computation time. The next step is step 602 of FIG. 6, in which the column-to-column offsets VColw(x) are determined, for example, based on the equation below:

VColw(x) = (1/m) Σy=1..m wx,y · (Ix+1,y − Ix,y)

The terms (Ix+1,y − Ix,y) are the forward differences between column x and column x+1, and these are calculated for example in step 706 of Figure 7, based on the image IC'. In some embodiments, only the forward differences that are below a certain threshold are considered in calculating the average for a given column. Indeed, values above a certain threshold can be considered as representing a vertical edge in the image of the scene that should not be removed. The threshold is chosen for example to be slightly greater than the maximum expected difference between one column and the next. The offset values OffCol(x) are then calculated, for example, by integrating the values VColw(x), as explained above. As mentioned above, the offset values OffCol(x) may for example be subtracted from the image IC' at step 603 to generate a clean image ICR in which column artifacts have been removed, or can be added to the other offsets as shown in Fig. 5. An advantage of the offset correction methods described herein is that they do not require the use of a mechanical shutter, and that they have been found to be very effective. Having thus described at least one illustrative embodiment, many changes, modifications and improvements will readily occur to those skilled in the art.
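The thresholding of forward differences described above can be sketched as follows (an illustration only; the function name, and the choice of returning 0 for a column in which every difference exceeds the threshold, are assumptions of this sketch):

```python
def thresholded_column_diffs(image, threshold):
    """Per-column mean of the forward differences I[x+1,y] - I[x,y],
    keeping only differences whose magnitude is below `threshold`;
    larger differences are treated as true vertical edges of the scene
    and excluded from the average."""
    m, n = len(image), len(image[0])
    out = []
    for x in range(n - 1):
        kept = [image[y][x + 1] - image[y][x] for y in range(m)
                if abs(image[y][x + 1] - image[y][x]) < threshold]
        out.append(sum(kept) / len(kept) if kept else 0.0)
    return out + [0.0]  # last column: no following column, value set to 0
```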
For example, although a specific example of a micro-bolometer is described in connection with Figure 2, it will be apparent to those skilled in the art that the methods described herein could be applied to many other implementations of a bolometer, or to other types of IR imaging devices. In addition, it will be appreciated by those skilled in the art that the various steps described in connection with the different embodiments may, in other embodiments, be carried out in different orders without impacting their efficiency. For example, the order in which the scale factors α and β are determined can be changed.
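By way of illustration only, the least-squares determination of the second scale factor β, as developed in the claims below, can be sketched in Python; forward differences are used here as the gradient estimate, which is one possible choice, and the function name is an assumption of this sketch:

```python
def dispersion_scale(image, disp):
    """Least-squares scale factor beta matching the gradients of the
    input image IB to those of the dispersion matrix IDISP:
    beta = sum(gx_I*gx_D + gy_I*gy_D) / sum(gx_D**2 + gy_D**2)."""
    m, n = len(image), len(image[0])
    num = den = 0.0
    for y in range(m):
        for x in range(n):
            # Forward differences, zero at the last row/column.
            gxI = image[y][x + 1] - image[y][x] if x + 1 < n else 0.0
            gyI = image[y + 1][x] - image[y][x] if y + 1 < m else 0.0
            gxD = disp[y][x + 1] - disp[y][x] if x + 1 < n else 0.0
            gyD = disp[y + 1][x] - disp[y][x] if y + 1 < m else 0.0
            num += gxI * gxD + gyI * gyD
            den += gxD * gxD + gyD * gyD
    return num / den if den else 0.0
```

When the input image equals the dispersion matrix scaled by a constant (plus any uniform offset), the returned factor recovers that constant, which is the intended behavior of the least-squares fit.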
Claims:
Claims (16)
[0001]
CLAIMS 1. An image processing method comprising: receiving, by a processing device (302), an input image (IB) captured by an array of pixels (102) sensitive to infrared, the pixel array having a plurality of pixel columns (C1, C2) each of which is associated with a reference pixel (106); determining, based on a column component vector (VCOL) representing a column-to-column variation introduced by the pixel array, a first scale factor (α) by estimating the level of said column-to-column variation present in the input image; generating column offset values (α·VCOL(y)) based on the product of the first scale factor by the values of said vector; determining, based on a 2D dispersion matrix (IDISP) representing a 2D dispersion introduced by the pixel array, a second scale factor (β) by estimating the level of said 2D dispersion present in the input image; generating pixel offset values (β·IDISP(x, y)) based on the product of the second scale factor by the values of said matrix; and generating a corrected image (IC') by applying said pixel and column offset values.
[0002]
The method of claim 1, further comprising generating a partially corrected image (ICC) based on said column offset values, wherein said second scale factor is generated based on said partially corrected image.
[0003]
The method of claim 1 or 2, wherein said column vector (VCOL) and said dispersion matrix (IDISP) are determined based on a reference image (IREF) representing offsets introduced by the pixel array.
[0004]
The method according to any one of claims 1 to 3, wherein said corrected image (IC') is generated based on the equation:

IC'(x, y) = IB(x, y) − α·ICOL(x, y) − β·IDISP(x, y)

where IB(x, y) is the input image, α is the first scale factor, ICOL(x, y) is a matrix comprising in each of its rows the column vector, β is the second scale factor, and IDISP(x, y) is the dispersion matrix.
[0005]
The method of claim 1 or 2, wherein: the column vector (VCOL) represents the difference between a first column vector (VCOL0) based on a first reference image taken at a first ambient temperature (T0) and a second column vector (VCOL1) based on a second reference image taken at a second ambient temperature (T1); and the dispersion matrix (IDISP) represents the difference between a first dispersion matrix (IDISP0) based on said first reference image and a second dispersion matrix (IDISP1) based on said second reference image.
[0006]
6. The method of claim 5, wherein said corrected image (IC') is generated based on the equation:

IC'(x, y) = IB(x, y) − ICOL0(x, y) − IDISP0(x, y) − α·I'COL(x, y) − β·I'DISP(x, y)

where IB(x, y) is the input image, α is the first scale factor, ICOL0(x, y) is a matrix comprising in each of its rows the first column vector, I'COL(x, y) is a matrix equal to I'COL = (ICOL1 − ICOL0) where ICOL1 is a matrix comprising in each of its rows the second column vector, β is the second scale factor, IDISP0(x, y) is the first dispersion matrix and I'DISP(x, y) is a matrix equal to I'DISP = (IDISP1 − IDISP0) where IDISP1 is the second dispersion matrix.
[0007]
The method of any one of claims 1 to 6, further comprising determining at least one column residue offset value (510) based on said corrected image (IC').
[0008]
The method of claim 7, wherein determining said at least one column residue offset value comprises: determining weights (wx,y) associated with at least some of the pixels of said corrected image, the weights being generated based on an estimate of the uniformity of the neighborhood of each of said at least some pixels; computing, for each of said at least some pixels, the difference with respect to a pixel value in the corresponding row of an adjacent column; and applying the weights to said differences and integrating the weighted differences to generate said at least one column residue offset value.
[0009]
The method of claim 8, wherein estimating the uniformity of the neighborhood of each of said at least some pixels is based on a gradient value (P∇) and a horizontal variance value calculated for each neighborhood.
[0010]
The method of any one of claims 1 to 9, wherein determining the first scale factor comprises: applying a high-pass filter along the rows of the image; applying said high-pass filter to the column vector; and determining column averages of the filtered image, the first scale factor being determined based on minimizing the differences between the column averages of the filtered image and the filtered values of the column vector.
[0011]
11. A method according to any one of claims 1 to 10, wherein the first scale factor α is determined based on the following equation:

α = [ Σx ((1/m) Σy T(IB(x, y))) × T(VCOL(x)) ] / [ Σx T(VCOL(x)) × T(VCOL(x)) ]

where T() is a high-pass filter applied to the column vector VCOL and to the rows of the input image IB(x, y), and m is the number of rows of the image.
[0012]
The method of any one of claims 1 to 11, wherein determining the second scale factor comprises: determining, for each pixel of the input image and for each element of the dispersion matrix, a gradient value based on at least one adjacent pixel, wherein the second scale factor is determined based on minimizing the difference between the gradients of the input image and the gradients of the dispersion matrix.
[0013]
The method of any one of claims 1 to 12, wherein the second scale factor β is determined based on the following equation:

β = Σ( ∇xIB·∇xIDISP + ∇yIB·∇yIDISP ) / Σ( (∇xIDISP)² + (∇yIDISP)² )

where IB is the input image, IDISP is the dispersion matrix, ∇x is the pixel gradient value between adjacent pixels in the row direction in the input image, and ∇y is the pixel gradient value in the column direction in the input image.
[0014]
The method of any of claims 1 to 13, wherein said column and pixel offset values are applied to an additional input image.
[0015]
The method of any one of claims 1 to 14, further comprising determining a gain correction value (γ), based on said corrected image, by minimizing the variance of the image multiplied by a gain.
[0016]
An image processing apparatus comprising: a memory (306) storing a column vector (VCOL) representing a column-to-column variation introduced by the pixel array, and a dispersion matrix (IDISP) representing a 2D dispersion introduced by the pixel array; and a processing device (302) adapted to: receive an input image (IB) captured by an array of pixels (102) sensitive to infrared, the pixel array comprising several pixel columns (C1, C2), each of which is associated with a reference pixel (106); determine, based on the column vector, a first scale factor (α) by estimating a level of said column-to-column variation present in the input image; generate column offset values (α·VCOL(x)) based on the product of the first scale factor by the values of said vector; determine, based on the dispersion matrix, a second scale factor (β) by estimating the level of said 2D dispersion present in the input image; generate pixel offset values (β·IDISP(x, y)) based on the product of the second scale factor by the values of said matrix; and generate a corrected image (IC') by applying said column and pixel offset values.
Similar technologies:
Publication number | Publication date | Patent title
EP2940991B1|2017-09-27|Method for processing an infrared image for correction of non-uniformities
FR2882160A1|2006-08-18|Video image capturing method for e.g. portable telephone, involves measuring local movements on edges of images and estimating parameters of global movement model in utilizing result of measurement of local movements
Tendero et al.2010|Efficient single image non-uniformity correction algorithm
EP2833622B1|2015-11-04|Diagnosis of the faulty state of a bolometric detection matrix
EP3314888B1|2019-02-06|Correction of bad pixels in an infrared image-capturing apparatus
EP3314887B1|2020-08-19|Detection of bad pixels in an infrared image-capturing apparatus
EP0393763A1|1990-10-24|Method for correcting offset dispersions in photoelectric sensors and correcting device therefor
EP3860113A1|2021-08-04|Method for correcting defects and in particular for reducing the noise in an image supplied by image sensor
EP1368965B1|2012-08-08|Method and device for electronic image sensing in several zones
WO2020221774A1|2020-11-05|Method and device for removing remanence in an infrared image of a static scene
EP3963539A1|2022-03-09|Method and device for removing remanence in an infrared image of a changing scene
EP2135220B1|2010-07-14|Method for correcting the spatial noise of an image sensor by luminance limitation
FR3066271B1|2019-07-12|MOTION SENSOR AND IMAGE SENSOR
FR3103939A1|2021-06-04|PROCESS FOR CAPTURING IMAGES USING SENSITIVE ELEMENTS WITH MEMORY EFFECT
FR3062009A1|2018-07-20|ADAPTIVE GENERATION OF A DYNAMICALLY ENHANCED SCENE IMAGE OF A SCENE FROM A PLURALITY OF IMAGES OBTAINED BY NON-DESTRUCTIVE READING OF AN IMAGE SENSOR
EP0810779A1|1997-12-03|Pixel signals processing method in a semi-conductor camera and semi-conductor camera for carrying out the method
FR2933796A1|2010-01-15|Image processing apparatus for homing head i.e. infra-red homing head on self-guided missile, has subtraction module subtracting intensity function from output intensity function to provide non-polarized target and background data
Patent family:
Publication number | Publication date
FR3020735B1|2017-09-15|
CA2889654A1|2015-10-30|
US20150319387A1|2015-11-05|
CN105049752B|2019-09-27|
RU2015115885A|2016-11-20|
CN105049752A|2015-11-11|
KR20150125609A|2015-11-09|
RU2677576C2|2019-01-17|
EP2940991A1|2015-11-04|
JP2015212695A|2015-11-26|
US10015425B2|2018-07-03|
JP6887597B2|2021-06-16|
EP2940991B1|2017-09-27|
RU2015115885A3|2018-11-01|
Cited references:
Publication number | Application date | Publication date | Applicant | Patent title
WO2004107728A2|2003-05-23|2004-12-09|Candela Microsystems|Image sensor with dark signal reduction|
US20100085438A1|2008-10-02|2010-04-08|Altasens, Inc.|Digital column gain mismatch correction for 4t cmos imaging systems-on-chip|
US20130314536A1|2009-03-02|2013-11-28|Flir Systems, Inc.|Systems and methods for monitoring vehicle occupants|
EP2632150A2|2012-02-22|2013-08-28|Ulis|Method for correcting the drift of an infrared radiation detector comprising an array of resistive imaging bolometers and device implementing such a method|
US5471240A|1993-11-15|1995-11-28|Hughes Aircraft Company|Nonuniformity correction of an imaging sensor using region-based correction terms|
JP3024532B2|1995-12-15|2000-03-21|日本電気株式会社|Thermal infrared imaging device|
JP3226859B2|1997-11-17|2001-11-05|日本電気株式会社|Imaging device|
JP3149919B2|1997-12-03|2001-03-26|日本電気株式会社|Solid-state imaging device and readout circuit using the same|
JP4396425B2|2004-07-07|2010-01-13|ソニー株式会社|Solid-state imaging device and signal processing method|
GB0625936D0|2006-12-28|2007-02-07|Thermoteknix Systems Ltd|Correction of non-uniformity of response in sensor arrays|
JP4991435B2|2007-08-02|2012-08-01|キヤノン株式会社|Imaging device|
US8655614B2|2007-11-30|2014-02-18|The Boeing Company|Position only fit, POF, algorithm for blur spot target tracking and discrimination|
US7995859B2|2008-04-15|2011-08-09|Flir Systems, Inc.|Scene based non-uniformity correction systems and methods|
RU2452992C1|2008-05-22|2012-06-10|МАТРИКС ЭЛЕКТРОНИК МЕЖЕРИНГ ПРОПЕРТИЗ, ЭлЭлСи|Stereoscopic measuring system and method|
JP5640316B2|2009-02-24|2014-12-17|株式会社ニコン|Imaging device|
US9237284B2|2009-03-02|2016-01-12|Flir Systems, Inc.|Systems and methods for processing infrared images|
CN102687502B|2009-08-25|2015-07-08|双光圈国际株式会社|Reducing noise in a color image|
EP2679000B1|2011-02-25|2016-11-02|Photonis Netherlands B.V.|Acquiring and displaying images in real-time|
EP2693742B1|2011-03-30|2016-08-31|Sony Corporation|A/d converter, solid-state imaging device and drive method, as well as electronic apparatus|
JP5871496B2|2011-06-24|2016-03-01|キヤノン株式会社|Imaging apparatus and driving method thereof|
CN102385701B|2011-10-14|2013-09-18|华中科技大学|Ununiformity correction method of scanning type infrared imaging system|
JP6045220B2|2012-06-27|2016-12-14|キヤノン株式会社|Imaging device and imaging apparatus|
CN102779332A|2012-07-09|2012-11-14|中国人民解放军国防科学技术大学|Nonlinear-fitting infrared non-uniform correction method based on time-domain Kalman filtering correction|
US20140028861A1|2012-07-26|2014-01-30|David Holz|Object detection and tracking|
CN102855610B|2012-08-03|2015-11-04|南京理工大学|Adopt the Infrared Image Non-uniformity Correction method of the parameter correctness factor|
CN103076096A|2013-01-07|2013-05-01|南京理工大学|Infrared nonuniformity correcting algorithm based on mid-value histogram balance|CN108353135B|2015-10-29|2020-07-21|富士胶片株式会社|Infrared imaging device and signal correction method using the same|
CN106855435B|2016-11-15|2019-04-09|北京空间机电研究所|Heterogeneity real-time correction method on long wave linear array infrared camera star|
CN108665422A|2017-08-30|2018-10-16|西安电子科技大学|The infrared heterogeneity detection method of single frames inversely perceived in Fourier|
US10803557B2|2017-12-26|2020-10-13|Xidian University|Non-uniformity correction method for infrared image based on guided filtering and high-pass filtering|
KR102110399B1|2018-06-19|2020-05-13|전주대학교 산학협력단|Method for compensating the infrared deterioration to measure internal or surface defects of the subject body|
FR3083901B1|2018-07-10|2021-10-08|Schneider Electric Ind Sas|IMAGE PROCESSING METHOD|
KR101955498B1|2018-07-19|2019-03-08|엘아이지넥스원 주식회사|Infrared image correction apparatus using neural network structure and method thereof|
FR3088512B1|2018-11-09|2020-10-30|Schneider Electric Ind Sas|PROCESS FOR PROCESSING AN IMAGE|
FR3111699A1|2020-06-23|2021-12-24|Schneider Electric Industries Sas|PROCESS FOR PROCESSING A RAW IMAGE COLLECTED BY A BOLOMETER DETECTOR AND ASSOCIATED DEVICE|
CN112393807B|2020-11-23|2021-11-23|昆明物理研究所|Infrared image processing method, device, system and computer readable storage medium|
Legal status:
2015-04-16| PLFP| Fee payment|Year of fee payment: 2 |
2015-11-06| PLSC| Publication of the preliminary search report|Effective date: 20151106 |
2016-04-26| PLFP| Fee payment|Year of fee payment: 3 |
2017-04-24| PLFP| Fee payment|Year of fee payment: 4 |
2018-04-24| PLFP| Fee payment|Year of fee payment: 5 |
2020-01-10| ST| Notification of lapse|Effective date: 20191206 |
Priority:
Application number | Application date | Patent title
FR1453917A|FR3020735B1|2014-04-30|2014-04-30|METHOD FOR PROCESSING AN INFRARED IMAGE FOR NON-UNIFORMITY CORRECTION|FR1453917A| FR3020735B1|2014-04-30|2014-04-30|METHOD FOR PROCESSING AN INFRARED IMAGE FOR NON-UNIFORMITY CORRECTION|
EP15164692.4A| EP2940991B1|2014-04-30|2015-04-22|Method for processing an infrared image for correction of non-uniformities|
US14/695,539| US10015425B2|2014-04-30|2015-04-24|Method of infrared image processing for non-uniformity correction|
RU2015115885A| RU2677576C2|2014-04-30|2015-04-27|Method of processing infrared images for heterogeneity correction|
CA2889654A| CA2889654A1|2014-04-30|2015-04-28|Treatment process for correction of non-uniformities in an infrared image|
CN201510213875.4A| CN105049752B|2014-04-30|2015-04-29|For the modified Infrared Image Processing Method of heterogeneity|
JP2015093484A| JP6887597B2|2014-04-30|2015-04-30|Infrared image processing methods and image processing devices for non-uniform correction|
KR1020150061243A| KR20150125609A|2014-04-30|2015-04-30|Method of infrared image processing for non-uniformity correction|